Extreme learning machines are feedforward neural networks for classification or regression with a single layer of hidden nodes, where the weights connecting inputs to hidden nodes are randomly assigned and never updated. The weights between hidden nodes and outputs are learned in a single step, which essentially amounts to fitting a linear model. The name "extreme learning machine" (ELM) was given to such models by Guang-Bin Huang. These models can produce good generalization performance and learn thousands of times faster than networks trained using backpropagation.

==Algorithm==

The simplest ELM training algorithm learns a model of the form

:<math>\mathbf{\hat{Y}} = \mathbf{W}_2 \, \sigma(\mathbf{W}_1 x)</math>

where <math>\mathbf{W}_1</math> is the matrix of input-to-hidden-layer weights, <math>\sigma</math> is some activation function, and <math>\mathbf{W}_2</math> is the matrix of hidden-to-output-layer weights. The algorithm proceeds as follows (a code sketch is given after the list):

# Fill <math>\mathbf{W}_1</math> with Gaussian random noise;
# estimate <math>\mathbf{W}_2</math> by least-squares fit to a matrix of response variables <math>\mathbf{Y}</math>, computed using the Moore–Penrose pseudoinverse <math>{}\cdot^{+}</math>, given a design matrix <math>\mathbf{X}</math>:
#: <math>\mathbf{W}_2 = \sigma(\mathbf{W}_1 \mathbf{X})^{+} \, \mathbf{Y}</math>
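The following is a minimal NumPy sketch of the two steps above, written with samples stored as rows. The hidden-layer size, the sigmoid activation, and the function names <code>elm_fit</code>/<code>elm_predict</code> are illustrative assumptions, not part of a reference implementation.

<syntaxhighlight lang="python">
import numpy as np

def elm_fit(X, Y, n_hidden=100, seed=None):
    """Train a single-hidden-layer ELM: random W1, least-squares W2.

    X : (n_samples, n_features) design matrix
    Y : (n_samples, n_outputs) response matrix
    """
    rng = np.random.default_rng(seed)
    # Step 1: fill the input-to-hidden weights W1 with Gaussian random noise.
    W1 = rng.standard_normal((n_hidden, X.shape[1]))
    # Hidden-layer activations sigma(W1 X); sigmoid is one common choice of sigma.
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T)))
    # Step 2: least-squares fit of the hidden-to-output weights W2
    # via the pseudoinverse: W2 = sigma(W1 X)^+ Y.
    W2 = np.linalg.pinv(H) @ Y
    return W1, W2

def elm_predict(X, W1, W2):
    """Apply the learned model: Y_hat = sigma(W1 x) W2 (row-vector convention)."""
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T)))
    return H @ W2
</syntaxhighlight>

Because samples are rows here, the learned <code>W2</code> is the transpose of the <math>\mathbf{W}_2</math> in the column-vector formula above; only step 2 involves any fitting, which is why training reduces to a single linear least-squares solve.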